4 research outputs found

    Quantization and Compressive Sensing

    Quantization is an essential step in digitizing signals and, therefore, an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
    Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing and Its Applications", 201
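    To make the scalar quantizers named in the abstract concrete, here is a minimal numerical sketch, assuming NumPy; the midrise uniform design and the measurement setup are illustrative choices, not details taken from the chapter:

    ```python
    import numpy as np

    def uniform_quantize(y, delta):
        """Midrise uniform scalar quantizer with step size delta."""
        return delta * (np.floor(y / delta) + 0.5)

    def one_bit_quantize(y):
        """1-bit quantizer: keep only the sign of each measurement."""
        return np.sign(y)

    rng = np.random.default_rng(0)
    x = np.zeros(64)
    x[[5, 20, 41]] = [1.0, -0.7, 0.4]                    # sparse signal
    A = rng.standard_normal((32, 64)) / np.sqrt(32)      # compressive measurement matrix
    y = A @ x                                            # compressive measurements

    q = uniform_quantize(y, delta=0.1)                   # per-sample error at most delta/2
    b = one_bit_quantize(y)                              # only sign information retained
    ```

    The uniform quantizer keeps the per-sample error bounded by delta/2, while the 1-bit quantizer discards all amplitude information, which is why reconstruction algorithms must account for the quantization model used.
    
    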

    MAP Estimators for Self-Similar Sparse Stochastic Models

    We consider the reconstruction of multi-dimensional signals from noisy samples. The problem is formulated within the framework of the theory of continuous-domain sparse stochastic processes. In particular, we study the fractional Laplacian as the whitening operator specifying the correlation structure of the model. We then derive a class of MAP estimators where the priors are confined to the family of infinitely divisible distributions. Finally, we provide simulations where the derived estimators are compared against total-variation (TV) denoising.
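    As a pointwise illustration of MAP estimation with an infinitely divisible prior, here is a sketch using the Laplace distribution, one member of that family, under additive Gaussian noise; the closed-form soft-thresholding solution is standard, but the parameter values are illustrative and the paper's actual estimators operate on the full continuous-domain model:

    ```python
    import numpy as np

    def map_denoise_laplace(y, sigma2, lam):
        """Pointwise MAP estimate under an i.i.d. Laplace prior and Gaussian noise.

        Solves argmin_x (y - x)**2 / (2 * sigma2) + lam * |x|, whose closed
        form is soft-thresholding with threshold lam * sigma2.
        """
        t = lam * sigma2
        return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

    rng = np.random.default_rng(1)
    x = np.zeros(100)
    x[::10] = 2.0                                  # sparse ground truth
    y = x + 0.3 * rng.standard_normal(100)         # noisy samples
    x_hat = map_denoise_laplace(y, sigma2=0.09, lam=5.0)
    ```

    Changing the prior within the infinitely divisible family changes the shape of this pointwise nonlinearity, which is the sense in which the MAP estimators form a class.
    
    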

    Optical Tomographic Image Reconstruction Based on Beam Propagation and Sparse Regularization

    Optical tomographic imaging requires an accurate forward model as well as regularization to mitigate missing-data artifacts and to suppress noise. Nonlinear forward models can provide more accurate interpretation of the measured data than their linear counterparts, but they generally result in computationally prohibitive reconstruction algorithms. Although sparsity-driven regularizers significantly improve the quality of the reconstructed image, they further increase the computational burden of imaging. In this paper, we present a novel iterative imaging method for optical tomography that combines a nonlinear forward model based on the beam propagation method (BPM) with an edge-preserving three-dimensional (3-D) total variation (TV) regularizer. The central element of our approach is a time-reversal scheme, which allows for an efficient computation of the derivative of the transmitted wave-field with respect to the distribution of the refractive index. This time-reversal scheme, together with our stochastic proximal-gradient algorithm, makes it possible to optimize under a nonlinear forward model in a computationally tractable way, thus enabling high-quality imaging of the refractive index throughout the object. We demonstrate the effectiveness of our method through several experiments on simulated and experimentally measured data.
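    The proximal-gradient structure underlying this kind of regularized reconstruction can be sketched in a few lines. The example below is a deliberate simplification: it uses a linear forward model and an l1 proximal step (soft-thresholding) in place of the paper's nonlinear BPM model and 3-D TV regularizer, purely to show the iterate-then-prox loop; all names and parameters are illustrative:

    ```python
    import numpy as np

    def prox_l1(v, t):
        """Proximal operator of t * ||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_gradient(A, y, lam, step, n_iter=500):
        """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal-gradient descent."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)               # gradient of the data-fidelity term
            x = prox_l1(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 80)) / np.sqrt(40)  # linear stand-in forward model
    x_true = np.zeros(80)
    x_true[[3, 17, 55]] = [1.5, -1.0, 0.8]           # sparse refractive-index contrast
    y = A @ x_true                                   # simulated measurements
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # step size from the Lipschitz constant
    x_hat = proximal_gradient(A, y, lam=0.02, step=step)
    ```

    In the paper's setting, the gradient step would instead differentiate through the nonlinear BPM forward model (via the time-reversal scheme), and the proximal step would apply the 3-D TV regularizer; the alternation of gradient and proximal updates is the same.
    
    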